Decentralized learning algorithms enable the training of deep learning models over large distributed datasets generated at different devices and locations, without the need for a central server. In practical scenarios, the distributed datasets can have significantly different data distributions across the agents. Current state-of-the-art decentralized algorithms mostly assume the data distributions to be Independent and Identically Distributed (IID). This paper focuses on improving decentralized learning over non-IID data distributions with minimal compute and memory overhead. We propose Neighborhood Gradient Clustering (NGC), a novel decentralized learning algorithm that modifies the local gradients of each agent using self- and cross-gradient information. In particular, the proposed method replaces each agent's local gradient with the weighted mean of the self-gradients, the model-variant cross-gradients (derivatives of the received neighbor model parameters with respect to the local dataset), and the data-variant cross-gradients (derivatives of the local model with respect to its neighbors' datasets). Further, we present CompNGC, a compressed version of NGC that reduces the communication overhead by $32\times$ by compressing the cross-gradients. We demonstrate the empirical convergence and efficiency of the proposed technique over sampled non-IID data distributions across various model architectures and graph topologies. Our experiments show that NGC and CompNGC outperform the existing state-of-the-art (SOTA) decentralized learning algorithms by $1$-$5\%$ on non-IID data with significantly lower compute and memory requirements. Further, we also show that the proposed NGC method achieves $5$-$40\%$ higher performance with no additional communication.
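The core of the described update is a weighted mixing of an agent's own gradient with the cross-gradients received from its neighbors. A minimal sketch of that mixing step, assuming a uniform averaging over neighbors and a single mixing weight `alpha` (both illustrative choices, not the weights prescribed by the paper):

```python
import numpy as np

def ngc_update(self_grad, model_variant_grads, data_variant_grads, alpha=0.5):
    """Sketch of one NGC-style gradient step for a single agent: replace the
    local gradient with a weighted mean of the self-gradient and the
    model-variant and data-variant cross-gradients from neighbors.

    `alpha` and the uniform neighbor weighting are hypothetical placeholders
    for the paper's learned/tuned weights.
    """
    cross = list(model_variant_grads) + list(data_variant_grads)
    if not cross:
        # no neighbors: fall back to plain local SGD gradient
        return self_grad
    cross_mean = np.mean(cross, axis=0)
    return alpha * self_grad + (1.0 - alpha) * cross_mean
```

With no cross-gradients the update reduces to ordinary local SGD, which makes the role of the neighbor terms easy to see.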
Decentralized distributed learning is the key to enabling large-scale machine learning (training) on edge devices utilizing private user-generated local data, without relying on the cloud. However, the practical realization of such on-device training is limited by the communication bottleneck, the computational complexity of training deep models, and the significant data distribution skew across devices. Many feedback-based compression techniques have been proposed in the literature to reduce the communication cost, and a few works propose algorithmic changes to aid performance in the presence of skewed data distributions by improving the convergence rate. To the best of our knowledge, there is no work in the literature that applies and demonstrates compute-efficient training techniques such as quantization and pruning for peer-to-peer decentralized learning setups. In this paper, we analyze and show the convergence of low-precision decentralized training, which aims to reduce the computational complexity of training and inference. Further, we study the effect of the degree of skew and of communication compression on low-precision decentralized training over various computer vision and Natural Language Processing (NLP) tasks. Our experiments indicate that 8-bit decentralized training has minimal accuracy loss compared to its full-precision counterpart, even with heterogeneous data. However, when low-precision training is accompanied by communication compression through sparsification, we observe a 1-2% drop in accuracy. The proposed low-precision decentralized training reduces computational complexity, memory usage, and communication cost while trading off less than 1% accuracy on both IID and non-IID data. In particular, with higher skew values, we observe an increase in accuracy (~0.5%) with low-precision training, indicating the regularization effect of quantization.
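The 8-bit training discussed above rests on mapping floating-point tensors to a small integer range and back. A minimal sketch of a uniform symmetric int8 quantizer, assuming per-tensor scaling (the paper's exact quantization scheme may differ):

```python
import numpy as np

def quantize_int8(x):
    """Uniform symmetric 8-bit quantization of a tensor (illustrative).

    Maps values to the int8 range [-127, 127] using a single per-tensor
    scale derived from the largest magnitude.
    """
    max_abs = np.max(np.abs(x))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from its int8 representation."""
    return q.astype(np.float32) * scale
```

A round trip through `quantize_int8`/`dequantize` shows the bounded quantization error that the abstract's accuracy comparisons are about.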
The management of cattle over a huge area is still a challenging problem in the farming sector. With the evolution of technology, unmanned aerial vehicles (UAVs) with consumer-level digital cameras are becoming a popular alternative to manual animal censuses for livestock estimation, since they are less risky and less expensive. This paper evaluated and compared cutting-edge object detection algorithms: YOLOv7, RetinaNet with a ResNet50 backbone, RetinaNet with EfficientNet, and Mask R-CNN. It aims to address the occlusion problem, that is, to detect hidden cattle in a huge drone-captured dataset using deep learning algorithms for accurate cattle detection. Experimental results showed that YOLOv7 was superior, with a precision of 0.612, when compared to the other algorithms. The proposed method proved superior to the usual competing algorithms for cow face detection, especially in very difficult cases.
Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis. However, it remains underexplored how the appearance of such representations can be efficiently edited while maintaining photorealism. In this work, we present PaletteNeRF, a novel method for photorealistic appearance editing of neural radiance fields (NeRF) based on 3D color decomposition. Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (i.e., 3D segmentations defined by a group of NeRF-type functions) that are shared across the scene. While our palette-based bases are view-independent, we also predict a view-dependent function to capture the color residual (e.g., specular shading). During training, we jointly optimize the basis functions and the color palettes, and we also introduce novel regularizers to encourage the spatial coherence of the decomposition. Our method allows users to efficiently edit the appearance of the 3D scene by modifying the color palettes. We also extend our framework with compressed semantic features for semantic-aware appearance editing. We demonstrate that our technique is superior to baseline methods both quantitatively and qualitatively for appearance editing of complex real-world scenes.
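The decomposition described above is, at each 3D point, a linear combination of shared palette colors plus a view-dependent residual. A toy sketch of that composition step, assuming per-point blending weights and an RGB palette of arbitrary size (names and shapes are hypothetical, not PaletteNeRF's actual interfaces):

```python
import numpy as np

def palette_color(weights, palette, residual):
    """Compose a point's color as a linear combination of shared palette
    bases plus a view-dependent residual term (toy illustration of the
    PaletteNeRF-style decomposition).

    weights:  (K,) per-point blending weights
    palette:  (K, 3) shared RGB palette colors
    residual: (3,) view-dependent color residual (e.g. specular shading)
    """
    base = np.asarray(weights) @ np.asarray(palette)  # view-independent part
    return base + np.asarray(residual)
```

Editing the scene's appearance then amounts to changing rows of `palette` while the per-point `weights` stay fixed, which is why palette edits propagate coherently across the scene.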
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
Legal contracts, such as employment or lease agreements, are important documents as they govern the obligations and entitlements of the various contracting parties. However, these documents are typically long and written in legalese, resulting in many manual hours spent understanding them. In this paper, we address the task of summarizing legal contracts for each of the contracting parties, to enable faster reviewing and improved understanding of them. Specifically, we collect a dataset consisting of pairwise importance comparison annotations by legal experts for ~293K sentence pairs from lease agreements. We propose a novel extractive summarization system to automatically produce a summary consisting of the most important obligations, entitlements, and prohibitions in a contract. It consists of two modules: (1) a content categorizer to identify sentences containing each of the categories (i.e., obligation, entitlement, and prohibition) for a party, and (2) an importance ranker to compare the importance among sentences of each category for a party to obtain a ranked list. The final summary is produced by selecting the most important sentences of a category for each of the parties. We demonstrate the effectiveness of our proposed system by comparing it against several text ranking baselines via automatic and human evaluation.
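The two-module pipeline described above can be sketched as a categorize-then-rank loop, with the learned categorizer and ranker stood in for by arbitrary callables (the function names and the top-k selection rule here are illustrative, not the paper's actual components):

```python
def summarize(sentences, categorize, score, top_k=1):
    """Toy two-stage extractive summarizer mirroring the described pipeline.

    categorize: sentence -> category label ("obligation", "entitlement",
                "prohibition") or None; stands in for the content categorizer.
    score:      sentence -> importance value; stands in for the ranker.
    Returns the top_k highest-scoring sentences per category.
    """
    buckets = {}
    for s in sentences:
        cat = categorize(s)
        if cat is not None:
            buckets.setdefault(cat, []).append(s)
    return {cat: sorted(group, key=score, reverse=True)[:top_k]
            for cat, group in buckets.items()}
```

A trivial keyword categorizer and a length-based score are enough to exercise the control flow, though the paper's modules are learned models.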
We present POTATO, the Portable text annotation tool, a free, fully open-sourced annotation system that 1) supports labeling many types of text and multimodal data; 2) offers easy-to-configure features to maximize the productivity of both deployers and annotators (convenient templates for common ML/NLP tasks, active learning, keypress shortcuts, keyword highlights, tooltips); and 3) supports a high degree of customization (editable UI, inserting pre-screening questions, attention and qualification tests). Experiments over two annotation tasks suggest that POTATO improves labeling speed through its specially-designed productivity features, especially for long documents and complex tasks. POTATO is available at https://github.com/davidjurgens/potato and will continue to be updated.
Flooding is one of the most disastrous natural hazards, responsible for substantial economic losses. A predictive model for flood-induced financial damages is useful for many applications such as climate change adaptation planning and insurance underwriting. This research assesses the predictive capability of regressors constructed on the National Flood Insurance Program (NFIP) dataset using neural networks (Conditional Generative Adversarial Networks), decision trees (Extreme Gradient Boosting), and kernel-based regressors (Gaussian Process). The assessment highlights the most informative predictors for regression. The distribution for claims amount inference is modeled with a Burr distribution, permitting the introduction of a bias correction scheme and increasing the regressor's predictive capability. To study the interaction with physical variables, we incorporate Daymet rainfall estimates into the NFIP data as an additional predictor. A study on the coastal counties in the eight US South-West states resulted in an $R^2=0.807$. Further analysis of 11 counties with a significant number of claims in the NFIP dataset reveals that Extreme Gradient Boosting provides the best results, that bias correction significantly improves the similarity with the reference distribution, and that the rainfall predictor strengthens the regressor performance.
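The Burr distribution used to model claim amounts has a closed-form CDF, which is what makes probability-based bias correction straightforward. A minimal sketch, assuming the common Burr Type XII form (the paper's exact parameterization and fitted shape values are not reproduced here):

```python
import numpy as np

def burr12_cdf(x, c, k):
    """CDF of the Burr Type XII distribution: F(x) = 1 - (1 + x^c)^(-k)
    for x >= 0, with shape parameters c > 0 and k > 0. Illustrative stand-in
    for the claim-amount model; the paper's fitted parameters are not shown.
    """
    x = np.asarray(x, dtype=float)
    return 1.0 - (1.0 + x**c) ** (-k)
```

With `c = k = 1` the CDF at `x = 1` is exactly 0.5, a quick sanity check on the formula.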
Recent advancements in sensing and communication facilitate obtaining high-frequency real-time data from various physical systems like power networks, climate systems, biological networks, etc. However, since the data are recorded by physical sensors, it is natural that the obtained data is corrupted by measurement noise. In this paper, we present a novel algorithm for online real-time learning of dynamical systems from noisy time-series data, which employs the Robust Koopman operator framework to mitigate the effect of measurement noise. The proposed algorithm has three main advantages: a) it allows for online real-time monitoring of a dynamical system; b) it obtains a linear representation of the underlying dynamical system, thus enabling the user to use linear systems theory for analysis and control of the system; c) it is computationally fast and less intensive than the popular Extended Dynamic Mode Decomposition (EDMD) algorithm. We illustrate the efficiency of the proposed algorithm by applying it to identify the Van der Pol oscillator, the IEEE 68 bus system, and a ring network of Van der Pol oscillators.
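The EDMD baseline mentioned above estimates a finite-dimensional Koopman approximation by lifting snapshot pairs through a dictionary of observables and solving a least-squares problem. A minimal sketch of standard EDMD (this is the baseline the paper compares against, not its robust online variant; `psi` is a user-supplied dictionary):

```python
import numpy as np

def edmd_koopman(X, Y, psi):
    """Standard EDMD estimate of a finite-dimensional Koopman approximation.

    X, Y: sequences of state snapshots with Y[i] the successor of X[i].
    psi:  dictionary of observables, mapping a state to a feature vector.
    Solves K = argmin || Psi(X) K - Psi(Y) ||_F, so that
    psi(x_next) ~= psi(x) @ K, i.e. a linear predictor in lifted space.
    """
    PX = np.array([psi(x) for x in X])  # lifted current states
    PY = np.array([psi(y) for y in Y])  # lifted successor states
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K
```

For a linear system with an identity dictionary, the recovered `K` is simply the system matrix, which is a convenient correctness check.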
Athletes routinely undergo fitness evaluations to evaluate their training progress. Typically, these evaluations require a trained professional who utilizes specialized equipment like force plates. For the assessment, athletes perform drop and squat jumps, and key variables are measured, e.g. velocity, flight time, and time to stabilization, to name a few. However, amateur athletes may not have access to professionals or equipment that can provide these assessments. Here, we investigate the feasibility of estimating key variables using video recordings. We focus on jump velocity as a starting point because it is highly correlated with other key variables and is important for determining posture and lower-limb capacity. We find that velocity can be estimated with a high degree of precision across a range of athletes, with an average R-value of 0.71 (SD = 0.06).